
    Representing LLVM-IR in a Code Property Graph

    In the past years, a number of static application security testing tools have been proposed which make use of so-called code property graphs, a graph model that retains rich information about the source code while enabling its users to write language-agnostic analyses. However, these tools suffer from several shortcomings. They work mostly on source code and exclude the analysis of third-party dependencies if those are only available as compiled binaries. Furthermore, their analyses are limited by whether an individual programming language is supported. While support for well-established languages such as C/C++ or Java is often included, languages that are still heavily evolving, such as Rust, are not considered because of the constant changes in the language design. To overcome these limitations, we extend an open source implementation of a code property graph to support LLVM-IR, which many compilers and binary lifters can emit. In this paper, we discuss how we address the challenges that arise when mapping concepts of an intermediate representation to a CPG. At the same time, we optimize the resulting graph to be minimal and close to the representation of equivalent source code. Our evaluation indicates that existing analyses can be reused without modification and that the performance requirements are comparable to operating on source code. This makes the approach suitable for the analysis of large-scale projects.
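    To make the mapping concrete, here is a minimal, hypothetical sketch of how a single LLVM-IR instruction could be represented as nodes and edges in a code-property-graph-like structure. It uses networkx for illustration; the node kinds, edge labels, and helper names are assumptions and do not reflect the schema of the open source CPG extended in the paper.

```python
# Hypothetical sketch: model '%sum = add i32 %a, %b' as a BinaryOperator
# node with data-flow (DFG) edges, keeping the instruction close to its
# source-level equivalent.
import networkx as nx

def add_binary_op(g: nx.DiGraph, result: str, op: str, lhs: str, rhs: str) -> str:
    node = f"BinaryOperator:{result}"
    g.add_node(node, kind="BinaryOperator", operator=op)
    for operand in (lhs, rhs):
        g.add_node(operand, kind="Reference")
        g.add_edge(operand, node, label="DFG")   # operands flow into the operator
    g.add_node(result, kind="VariableDeclaration")
    g.add_edge(node, result, label="DFG")        # operator result flows to %sum
    return node

cpg = nx.DiGraph()
add_binary_op(cpg, "%sum", "add", "%a", "%b")
print(list(cpg.edges(data=True)))
```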

    ZKlaims: Privacy-preserving Attribute-based Credentials using Non-interactive Zero-knowledge Techniques

    In this paper we present ZKlaims: a system that allows users to present attribute-based credentials in a privacy-preserving way. We achieve a zero-knowledge property on the basis of Succinct Non-interactive Arguments of Knowledge (SNARKs). ZKlaims allow users to prove statements on credentials issued by trusted third parties. The credential contents are never revealed to the verifier as part of the proving process. Further, ZKlaims can be presented non-interactively, removing the need for interactive proofs between the user and the verifier. This allows ZKlaims to be exchanged via fully decentralized services and storage, such as traditional peer-to-peer networks based on distributed hash tables (DHTs) or even blockchains. To show this, we include a performance evaluation of ZKlaims and show how it can be integrated into decentralized identity provider services.
    Comment: 8 pages, published at SECRYPT 2019
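    As an illustration of the kind of statement such a credential proof covers, the sketch below shows a predicate over a committed attribute (e.g. "the issued credential contains an age of at least 18") with the attribute kept as a private witness. It only spells out the relation a SNARK would prove succinctly and in zero knowledge; it is not a SNARK itself, and the commitment scheme and names are assumptions rather than ZKlaims internals.

```python
# Toy relation: public inputs are the credential commitment and a threshold;
# the private witness is the attribute value and its blinding factor. A SNARK
# would emit a short proof that this relation holds without revealing the witness.
import hashlib

def commit(attribute_value: int, blinding: bytes) -> bytes:
    # stand-in for the commitment produced by the issuing trusted third party
    return hashlib.sha256(blinding + attribute_value.to_bytes(4, "big")).digest()

def relation_holds(public_commitment: bytes, threshold: int,
                   private_value: int, private_blinding: bytes) -> bool:
    return (commit(private_value, private_blinding) == public_commitment
            and private_value >= threshold)

blinding = b"\x01" * 16
cm = commit(25, blinding)                      # credential issued to the user
assert relation_holds(cm, 18, 25, blinding)    # verifier only learns that a valid proof exists
```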

    Towards Tracking Data Flows in Cloud Architectures

    As cloud services become central in an increasing number of applications, they process and store more personal and business-critical data. At the same time, privacy and compliance regulations such as the GDPR, the EU ePrivacy regulation, PCI, and the upcoming EU Cybersecurity Act raise the bar for secure processing and traceability of critical data. In particular, the demand to provide information about existing data records of an individual and the ability to delete them on demand is central in privacy regulations. Common to these requirements is that cloud providers must be able to track data as it flows across the different services to ensure that it never moves outside of the legitimate realm, and that it is known at all times where a specific copy of a record belonging to a specific individual or business process is located. However, current cloud architectures provide neither the means to holistically track data flows across different services nor the means to enforce policies on data flows. In this paper, we point out the deficits in the data flow tracking functionalities of major cloud providers by means of a set of practical experiments. We then generalize from these experiments, introducing a generic architecture that aims at solving the problem of cloud-wide data flow tracking, and show how it can be built in a Kubernetes-based prototype implementation.
    Comment: 11 pages, 5 figures, 2020 IEEE 13th International Conference on Cloud Computing (CLOUD)
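    The sketch below illustrates the underlying idea in miniature: every record carries provenance metadata, and a policy check runs before a copy may flow to another service or region, so that "where is this individual's data?" can always be answered. The policy shape, names, and API are assumptions for illustration, not the paper's architecture or its Kubernetes prototype.

```python
# Minimal data-flow tracking sketch: records carry their owner and the list of
# places a copy has been; transfers are checked against a region policy first.
from dataclasses import dataclass, field

POLICY = {"allowed_regions": {"eu-west-1", "eu-central-1"}}

@dataclass
class Record:
    owner: str                                     # data subject the record belongs to
    payload: dict
    locations: list = field(default_factory=list)  # every (service, region) holding a copy

def transfer(record: Record, target_service: str, target_region: str) -> None:
    if target_region not in POLICY["allowed_regions"]:
        raise PermissionError(f"flow to {target_region} violates policy")
    record.locations.append((target_service, target_region))

r = Record(owner="alice", payload={"invoice": "..."})
transfer(r, "billing", "eu-west-1")
print(r.locations)   # supports GDPR-style access and deletion requests
```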

    Poster: Patient Community -- A Test Bed For Privacy Threat Analysis

    Research and development of privacy analysis tools currently suffers from a lack of test beds for the evaluation and comparison of such tools. In this work, we propose a benchmark application that implements an extensive list of privacy weaknesses based on the LINDDUN methodology. It represents a social network for patients whose architecture was first described in an example analysis conducted by one of the LINDDUN authors. We have implemented this architecture and extended it with more privacy threats to build a test bed that enables comprehensive and independent testing of analysis tools.
    Comment: 3 pages, 1 figure

    Computer-Assisted Resection and Reconstruction of Pelvic Tumor Sarcoma

    Pelvic sarcoma is associated with a relatively poor prognosis, due to the difficulty in obtaining an adequate surgical margin given the complex pelvic anatomy. Magnetic resonance imaging and computerized tomography allow valuable surgical resection planning, but intraoperative localization remains hazardous. Surgical navigation systems could be of great benefit in surgical oncology, especially for tumors in difficult locations; however, no commercial surgical oncology software is currently available. Customized navigation software was developed and used to perform a synovial sarcoma resection and allograft reconstruction. The software permitted preoperative planning with defined target planes and intraoperative navigation with a free-hand saw blade. The allograft was cut according to the same planes. Histological examination revealed tumor-free resection margins. Allograft fitting to the pelvis of the patient was excellent and allowed stable osteosynthesis. We believe this to be the first case of combined computer-assisted tumor resection and reconstruction with an allograft.
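    For intuition on navigating a cut against a planned target plane, the following sketch computes the two quantities a navigation display typically reports: the off-plane distance and the angular deviation between the tracked saw-blade plane and the preoperative target plane. The function, coordinates, and values are hypothetical and are not taken from the study's software.

```python
# Signed distance (mm) and angular deviation (degrees) between a tracked
# saw-blade plane and a planned target plane, each given by a point + normal.
import numpy as np

def plane_deviation(blade_point, blade_normal, target_point, target_normal):
    n_b = np.asarray(blade_normal, float); n_b /= np.linalg.norm(n_b)
    n_t = np.asarray(target_normal, float); n_t /= np.linalg.norm(n_t)
    distance = float(np.dot(np.asarray(blade_point, float) - np.asarray(target_point, float), n_t))
    angle = float(np.degrees(np.arccos(np.clip(abs(np.dot(n_b, n_t)), 0.0, 1.0))))
    return distance, angle

d, a = plane_deviation([102.0, 45.0, 13.0], [0.1, 0.0, 1.0],
                       [100.0, 44.0, 12.0], [0.0, 0.0, 1.0])
print(f"off-plane distance {d:.1f} mm, angular deviation {a:.1f} deg")
```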

    Selection of massive bone allografts using shape-matching 3-dimensional registration

    Background and purpose: Massive bone allografts are used when surgery causes large segmental defects. Shape-matching is the primary criterion for selection of an allograft. The current selection method, based on 2-dimensional template comparison, is inefficient for 3-dimensional complex bones. We have analyzed a 3-dimensional (3-D) registration method to match the anatomy of the allograft with that of the recipient.
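    The core step behind such shape matching is a rigid registration between surface points of the recipient bone and a candidate allograft. The sketch below shows a least-squares rigid alignment (the Kabsch/SVD solution) for corresponding point sets, plus a residual error that could serve as a shape-match score; the correspondence search and the actual matching metric used in the study are not reproduced here.

```python
# Least-squares rigid transform (rotation R, translation t) aligning source
# points onto target points (both N x 3 with known correspondences), and the
# RMS residual as a simple shape-match score (lower = better fit).
import numpy as np

def rigid_register(source: np.ndarray, target: np.ndarray):
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t

def rms_error(source: np.ndarray, target: np.ndarray, R: np.ndarray, t: np.ndarray) -> float:
    aligned = source @ R.T + t
    return float(np.sqrt(((aligned - target) ** 2).sum(axis=1).mean()))
```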

    Physical mixing effects on iron biogeochemical cycling: FeCycle experiment

    The effects of physical processes on the distribution, speciation, and sources/sinks for Fe in a high-nutrient low-chlorophyll (HNLC) region were assessed during FeCycle, a mesoscale SF6 tracer release during February 2003 (austral summer) to the SE of New Zealand. Physical mixing processes were prevalent during FeCycle, with rapid patch growth (strain rate γ = 0.17–0.20 d−1) from a circular shape (50 km2) into a long filament of ∼400 km2 by day 10. Slippage between layers saw the patch head overlying noninfused waters while the tail was capped by adjacent surface waters, resulting in an SF6 maximum at depth. As the patch developed it entrained adjacent waters containing higher chlorophyll concentrations than, but similar dissolved iron (DFe) levels to, the initially infused patch. DFe was low (∼60 pmol L−1) in surface waters during FeCycle and was dominated by organic complexation. Nighttime measurements of Fe(II) (∼20 pmol L−1) suggest the presence of Fe(II) organic complexes in the absence of an identifiable fast Fe(III) reduction process. Combining residence times and phytoplankton uptake fluxes for DFe indicates it is cycled through the biota 140–280 times before leaving the winter mixed layer (WML). This strong Fe demand throughout the euphotic zone, coupled with the low Fe:NO3− ratio (11.9 μmol:mol) below the ferricline, suggests that vertical diffusion of Fe is insufficient to relieve chronic iron limitation, indicating the importance of atmospheric inputs of Fe to this region.
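    The 140–280 figure rests on a generic relation: the number of times dissolved Fe passes through the biota is the ratio of its residence time in the winter mixed layer to its biological turnover time, the latter being the DFe inventory divided by the phytoplankton uptake flux. The symbols below are generic, not the paper's notation, and no values from the study are reproduced.

```latex
\[
  n_{\mathrm{cycles}} = \frac{\tau_{\mathrm{residence}}}{\tau_{\mathrm{turnover}}},
  \qquad
  \tau_{\mathrm{turnover}} = \frac{\int_{0}^{z_{\mathrm{WML}}} [\mathrm{DFe}]\,\mathrm{d}z}{\Phi_{\mathrm{uptake}}}
\]
```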

    Extending the Implicit Association Test (IAT): Assessing Consumer Attitudes Based on Multi-Dimensional Implicit Associations

    Background: The authors present a procedural extension of the popular Implicit Association Test (IAT; [1]) that allows for indirect measurement of attitudes on multiple dimensions (e.g., safe–unsafe; young–old; innovative–conventional, etc.) rather than on a single evaluative dimension only (e.g., good–bad). Methodology/Principal Findings: In two within-subjects studies, attitudes toward three automobile brands were measured on six attribute dimensions. Emphasis was placed on evaluating the methodological appropriateness of the new procedure, providing strong evidence for its reliability, validity, and sensitivity. Conclusions/Significance: This new procedure yields detailed information on the multifaceted nature of brand associations that can add up to a more abstract overall attitude. Like the IAT, its multi-dimensional extension (dubbed md-IAT) is suited for reliably measuring attitudes consumers may not be consciously aware of, able to express, or willing to share with the researcher [2,3].
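    IAT-style measures score each attribute dimension from response latencies: faster categorization when a brand is paired with one pole of the dimension than with the other indicates a stronger implicit association. A simplified per-dimension score in the spirit of the standard IAT D-score is sketched below; trial trimming and error penalties are omitted, and the latencies are made up for illustration.

```python
# Simplified D-score for one attribute dimension: difference of mean response
# latencies between the two pairings, divided by the standard deviation of all
# latencies from both pairings. The md-IAT would repeat this per dimension.
from statistics import mean, stdev

def d_score(compatible_ms: list[float], incompatible_ms: list[float]) -> float:
    sd_all = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / sd_all

# e.g. a brand paired with "innovative" vs. "conventional" (hypothetical latencies in ms)
print(d_score([640, 700, 655, 610], [820, 760, 905, 840]))
```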